Providence County
A Symplectic Analysis of Alternating Mirror Descent
Katona, Jonas, Wang, Xiuyuan, Wibisono, Andre
Motivated by understanding the behavior of the Alternating Mirror Descent (AMD) algorithm for bilinear zero-sum games, we study the discretization of continuous-time Hamiltonian flow via the symplectic Euler method. We provide a framework for analysis using results from Hamiltonian dynamics, Lie algebra, and symplectic numerical integrators, with an emphasis on the existence and properties of a conserved quantity, the modified Hamiltonian (MH), for the symplectic Euler method. We compute the MH in closed form when the original Hamiltonian is a quadratic function, and show that it generally differs from the other conserved quantity previously known in that case. We derive new error bounds on the MH when truncated at orders in the stepsize, in terms of the number of iterations $K$, and use these bounds to show an improved $\mathcal{O}(K^{1/5})$ total regret bound and an $\mathcal{O}(K^{-4/5})$ duality gap of the average iterates for AMD. Finally, we propose a conjecture which, if true, would imply that the total regret for AMD scales as $\mathcal{O}\left(K^{\varepsilon}\right)$ and the duality gap of the average iterates as $\mathcal{O}\left(K^{-1+\varepsilon}\right)$ for any $\varepsilon>0$, and that $\varepsilon=0$ can be taken under certain convergence conditions for the MH.
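For readers who want to see the conserved-quantity phenomenon concretely, here is a minimal numerical sketch. It uses our own textbook example (the harmonic oscillator $H(q,p) = \tfrac{1}{2}(q^2+p^2)$, not the paper's notation): the symplectic Euler method does not conserve $H$, but for this quadratic Hamiltonian it exactly conserves the step-size-dependent modified quadratic $\widetilde{H}(q,p) = \tfrac{1}{2}(q^2+p^2-hqp)$.

```python
# Sketch (our own example): symplectic Euler on H(q,p) = (q^2 + p^2)/2.
# For this quadratic H, the modified quadratic
#   H_mod(q,p) = (q^2 + p^2 - h*q*p)/2
# is exactly invariant under the scheme, while H itself only oscillates
# in an O(h)-wide band. All names here are illustrative choices.
h = 0.1
q, p = 1.0, 0.0
H = lambda q, p: 0.5 * (q**2 + p**2)
H_mod = lambda q, p: 0.5 * (q**2 + p**2 - h * q * p)

H0, M0 = H(q, p), H_mod(q, p)
drift_H = drift_mod = 0.0
for _ in range(10_000):
    p -= h * q   # symplectic Euler: update p using the old q ...
    q += h * p   # ... then update q using the *new* p
    drift_H = max(drift_H, abs(H(q, p) - H0))
    drift_mod = max(drift_mod, abs(H_mod(q, p) - M0))

print(f"max drift of H:     {drift_H:.3e}")    # O(h) bounded oscillation
print(f"max drift of H_mod: {drift_mod:.3e}")  # near machine precision
```

The bounded, non-drifting behavior of $H$ and the exact invariance of $\widetilde{H}$ are the simplest instance of the modified-Hamiltonian behavior that the paper's regret analysis for AMD builds on.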
America's AI takeover: New map reveals US cities DOOMED to lose the most jobs to tech... is YOUR hometown at risk?
Artificial intelligence is taking over countless industries around the U.S., raising concerns among Americans who fear they will be replaced by the tech. Now, new research has revealed the most and least AI-proof cities across the nation, based on five key metrics including job availability, the state's population growth rate, and job diversity. Workers based in the major coastal tech hubs may want to look to large inland metropolitan areas if they want to avoid losing out to artificial intelligence, with Phoenix, Arizona coming in first as the most AI-proof city in the country. The report warned that Providence, Rhode Island is the top city most susceptible to AI-related job loss.
Volta Medical VX1 AI Software to be Featured at Heart Rhythm 2022
MARSEILLE, France and PROVIDENCE, R.I., April 27, 2022 (GLOBE NEWSWIRE) -- Volta Medical, a pioneering medtech startup advancing novel artificial intelligence (AI) algorithms to treat cardiac arrhythmias, today announced it will participate in Heart Rhythm 2022, where the Volta VX1 digital AI companion technology will be featured in several venues, including a poster session, podium presentation, Rhythm Theater program and the Volta exhibit booth. VX1 is a machine and deep learning-based algorithm designed to assist operators in the real-time manual annotation of 3D anatomical and electrical maps of the human atria during atrial fibrillation (AF) or atrial tachycardia. It is the first FDA-cleared AI-based tool in interventional cardiac electrophysiology (EP). On Friday, April 29, VX1 will be highlighted in two scientific sessions, including session DH-202, "Machine Learning Applications for Arrhythmia Detection and Treatment," from 10:30-11:30 a.m. Volta's Rhythm Theater presentation, "Can AI Solve the Persistent AF Paradigm?," will be held Saturday, April 30 from 10:00-11:00 a.m.
Deep Learning in High Dimension: Neural Network Approximation of Analytic Functions in $L^2(\mathbb{R}^d,\gamma_d)$
Schwab, Christoph, Zech, Jakob
To quantify DNN expression rates, we assume $f$ to belong to a class of functions that allows holomorphic extensions to certain cartesian products of strips around the real line in the complex plane. This implies summability results on coefficients in Wiener-Hermite polynomial chaos expansions of $f$. We separately discuss the finite dimensional case $d \in \mathbb{N}$ and the (countably) infinite dimensional case $d = \infty$. Our expression rate analysis is based on expressing such functions through their finite- or infinite-parametric Wiener-Hermite polynomial chaos (gpc) expansion. Reapproximating the gpc expansion, we provide DNN architectures and corresponding DNN size bounds which show that such functions can be approximated at an exponential convergence rate in finite dimension $d \in \mathbb{N}$. For $d = \infty$, i.e. in the infinite dimensional case, our DNN expression rate bounds are free from the so-called curse of dimensionality: we prove that in this case our DNN expression rate bounds are only determined by the summability of the gpc expansion coefficient sequences. Thus, while we concentrate on analytic functions, the scope of our results extends to statistical learning of any object that can be represented as a Wiener-Hermite expansion with bounds on the summability of the coefficient sequences. Relevance of the present investigation derives from the fact that functions belonging to the above described class arise in particular as response maps in uncertainty quantification (UQ) for partial differential equations (PDEs for short) with Gaussian random field inputs. Modelling unknown inputs of elliptic or parabolic PDEs by a log-Gaussian random field, the corresponding PDE response surface can under certain assumptions be shown to be of this type [5].
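To make the role of the Wiener-Hermite coefficients tangible, here is a small numerical sketch under our own assumptions: the test function $f(x) = e^x$, the truncation level, and the quadrature degree are illustrative choices, not the authors' setup. It computes the first probabilists'-Hermite chaos coefficients of $f$ against the standard Gaussian measure and exhibits the rapid decay that summability arguments of this kind rely on; it does not reproduce the DNN reapproximation step.

```python
# Sketch (our own choices): Wiener-Hermite (probabilists') chaos coefficients
# of the analytic function f(x) = exp(x) under the standard Gaussian measure,
# computed with Gauss-Hermite quadrature. For this f one can check by hand
# that c_n = e^{1/2} / n!, i.e. factorial (faster-than-exponential) decay.
from math import factorial

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

x, w = hermegauss(40)        # nodes/weights for the weight exp(-x^2/2)
w = w / np.sqrt(2 * np.pi)   # renormalize to the standard Gaussian measure

coeffs = []
for n in range(10):
    e_n = np.zeros(n + 1)
    e_n[n] = 1.0             # coefficient vector selecting He_n
    # c_n = E[f(X) He_n(X)] / n!, since E[He_n(X)^2] = n!
    coeffs.append(np.sum(w * np.exp(x) * hermeval(x, e_n)) / factorial(n))

print(np.round(coeffs, 6))   # matches e^{1/2}/n! to quadrature accuracy
```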
Facebook to delete users' facial-recognition data after privacy complaints
Facebook says it will delete facial recognition data on 1 billion people as it backs away from the technology. Critics had called it a danger to personal privacy. Providence, R.I. -- Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people.
New report assesses progress and risks of artificial intelligence
PROVIDENCE, R.I. [Brown University] -- Artificial intelligence has reached a critical turning point in its evolution, according to a new report by an international panel of experts assessing the state of the field. Substantial advances in language processing, computer vision and pattern recognition mean that AI is touching people's lives on a daily basis -- from helping people to choose a movie to aiding in medical diagnoses. With that success, however, comes a renewed urgency to understand and mitigate the risks and downsides of AI-driven systems, such as algorithmic discrimination or use of AI for deliberate deception. Computer scientists must work with experts in the social sciences and law to ensure that the pitfalls of AI are minimized. Those conclusions are from a report titled "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report," which was compiled by a panel of experts from computer science, public policy, psychology, sociology and other disciplines.
Shape-Preserving Dimensionality Reduction: An Algorithm and Measures of Topological Equivalence
We introduce a linear dimensionality reduction technique that preserves topological features via persistent homology. The method is designed to find a linear projection $L$ which preserves the persistence diagram of a point cloud $\mathbb{X}$, via simulated annealing. The projection $L$ induces a set of canonical simplicial maps from the Rips (or \v{C}ech) filtration of $\mathbb{X}$ to that of $L\mathbb{X}$. In addition to the distance between persistence diagrams, the projection induces a map between filtrations, called a filtration homomorphism. Using the filtration homomorphism, one can measure the difference between the shapes of two filtrations by directly comparing simplicial complexes, with respect to quasi-isomorphism $\mu_{\operatorname{quasi-iso}}$ or strong homotopy equivalence $\mu_{\operatorname{equiv}}$. The measures $\mu_{\operatorname{quasi-iso}}$ and $\mu_{\operatorname{equiv}}$ quantify how large a portion of the corresponding simplicial complexes is quasi-isomorphic or strongly homotopy equivalent, respectively. We validate the effectiveness of our framework with simple examples.
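A minimal computational sketch of this pipeline, under our own assumptions, might look as follows. The packages (ripser for Rips persistence, persim for the bottleneck distance), the noisy-circle data, and the annealing schedule are all our illustrative choices, not the authors' code; the sketch searches over projections $L$ with orthonormal columns and anneals the bottleneck distance between the degree-1 persistence diagrams of $\mathbb{X}$ and $L\mathbb{X}$.

```python
# Sketch (our own setup): find a linear projection L : R^5 -> R^2 whose image
# has a degree-1 persistence diagram close to the original cloud's, via
# simulated annealing on the bottleneck distance between diagrams.
import numpy as np
from ripser import ripser          # Vietoris-Rips persistent homology
from persim import bottleneck      # bottleneck distance between diagrams

rng = np.random.default_rng(0)

def h1_diagram(points):
    """Degree-1 persistence diagram of the Rips filtration."""
    return ripser(points, maxdim=1)["dgms"][1]

# Point cloud: a noisy circle embedded in R^5 (one H1 feature to preserve).
theta = rng.uniform(0, 2 * np.pi, 120)
X = np.zeros((120, 5))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)
X += 0.05 * rng.normal(size=X.shape)
target = h1_diagram(X)

def orthonormal(M):
    """Replace a 5x2 matrix by one with orthonormal columns (QR)."""
    Q, _ = np.linalg.qr(M)
    return Q

L = orthonormal(rng.normal(size=(5, 2)))
cost = bottleneck(target, h1_diagram(X @ L))
T = 1.0
for step in range(200):
    L_new = orthonormal(L + 0.3 * rng.normal(size=L.shape))
    cost_new = bottleneck(target, h1_diagram(X @ L_new))
    # Metropolis rule: accept improvements, occasionally accept worse moves.
    if cost_new < cost or rng.uniform() < np.exp(-(cost_new - cost) / T):
        L, cost = L_new, cost_new
    T *= 0.98  # geometric cooling schedule

print(f"final bottleneck distance between diagrams: {cost:.4f}")
```

The paper's $\mu_{\operatorname{quasi-iso}}$ and $\mu_{\operatorname{equiv}}$ measures compare the filtrations themselves; the diagram distance annealed above is only the simplest of the quantities it considers.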
Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance
Barakat, A., Bianchi, P., Hachem, W., Schechtman, Sh.
In this paper, a general stochastic optimization procedure is studied, unifying several variants of stochastic gradient descent such as, among others, the stochastic heavy ball method, the Stochastic Nesterov Accelerated Gradient algorithm (S-NAG), and the widely used Adam algorithm. The algorithm is seen as a noisy Euler discretization of a non-autonomous ordinary differential equation, recently introduced by Belotto da Silva and Gazeau, which is analyzed in depth. Assuming that the objective function is non-convex and differentiable, the stability and the almost sure convergence of the iterates to the set of critical points are established. A noteworthy special case is the convergence proof of S-NAG in a non-convex setting. Under some assumptions, the convergence rate is provided in the form of a Central Limit Theorem. Finally, the non-convergence of the algorithm to undesired critical points, such as local maxima or saddle points, is established. Here, the main ingredient is a new avoidance-of-traps result for non-autonomous settings, which is of independent interest.
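As a point of reference for the discrete-time side, here is a toy sketch of the stochastic heavy ball update, one of the variants the unified analysis covers. The objective, noise model, and step sizes are our own illustrative choices, and this is the plain momentum recursion rather than the paper's general non-autonomous scheme.

```python
# Sketch (our own toy setup): stochastic heavy ball on the non-convex
# objective f(x) = (x^2 - 1)^2 / 4, whose critical points are x = 0
# (a local maximum, one of the "undesired critical points") and x = +/-1
# (the minimizers the iterates should reach).
import numpy as np

rng = np.random.default_rng(0)
grad = lambda x: x * (x**2 - 1)   # f'(x)

x, v = 2.0, 0.0                   # iterate and momentum variable
gamma, alpha = 0.9, 1e-2          # momentum coefficient and step size
for _ in range(5_000):
    g = grad(x) + 0.1 * rng.normal()  # noisy gradient oracle
    v = gamma * v - alpha * g         # heavy-ball momentum update
    x = x + v

print(x)  # typically settles near +1 or -1, avoiding the trap at 0
```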
Misinformation on coronavirus is proving highly contagious
PROVIDENCE, Rhode Island – As the world races to find a vaccine and a treatment for COVID-19, there is seemingly no antidote in sight for the burgeoning outbreak of coronavirus conspiracy theories, hoaxes, anti-mask myths and sham cures. The phenomenon, unfolding largely on social media, escalated this week when U.S. President Donald Trump retweeted a false video about an anti-malaria drug being a cure for the virus and it was revealed that Russian intelligence is spreading disinformation about the crisis through English-language websites. Experts worry the torrent of bad information is dangerously undermining efforts to slow the virus, whose death toll in the U.S. hit 150,000 Wednesday, by far the highest in the world, according to the tally kept by Johns Hopkins University. Over a half-million people have died in the rest of the world. Hard-hit Florida reported 216 deaths, breaking the single-day record it set a day earlier.
DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators
Lu, Lu, Jin, Pengzhan, Karniadakis, George Em
Lu Lu¹, Pengzhan Jin², and George Em Karniadakis¹ (¹Division of Applied Mathematics, Brown University, Providence, RI 02912, USA; ²LSEC, ICMSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China)

Abstract. While it is widely known that neural networks are universal approximators of continuous functions, a less known and perhaps more powerful result is that a neural network with a single hidden layer can accurately approximate any nonlinear continuous operator [5]. This universal approximation theorem is suggestive of the potential application of neural networks in learning nonlinear operators from data. However, the theorem guarantees only a small approximation error for a sufficiently large network, and does not consider the important optimization and generalization errors. To realize this theorem in practice, we propose deep operator networks (DeepONets) to learn operators accurately and efficiently from a relatively small dataset. A DeepONet consists of two sub-networks, one for encoding the input function at a fixed number of sensors $x_i$, $i = 1, \dots, m$ (branch net), and another for encoding the locations for the output functions (trunk net). We perform systematic simulations for identifying two types of operators, i.e., dynamic systems and partial differential equations, and demonstrate that DeepONet significantly reduces the generalization error compared to fully connected networks. We also derive theoretically the dependence of the approximation error on the number of sensors (where the input function is defined) as well as on the input function type, and we verify the theorem with computational results. More importantly, we observe high-order error convergence in our computational tests, namely polynomial rates (from half order to fourth order) and even exponential convergence with respect to the training dataset size.

1 Introduction. The universal approximation theorem states that neural networks can be used to approximate any continuous function to arbitrary accuracy if no constraint is placed on the width and depth of the hidden layers [7, 11]. However, another approximation result, which is yet more surprising and has not been appreciated so far, states that a neural network with a single hidden layer can accurately approximate any nonlinear continuous functional (a mapping from a space of functions into the real numbers) [3, 18, 25] or (nonlinear) operator (a mapping from a space of functions into another space of functions) [5, 4]. Before reviewing the approximation theorem for operators, we introduce some notation, which will be used throughout this paper. Let $G$ be an operator taking an input function $u$; then $G(u)$ is the corresponding output function.
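To make the branch/trunk structure concrete, here is a structural sketch with our own illustrative sizes and untrained random weights (none of this is the authors' code): the branch net encodes the samples $u(x_1), \dots, u(x_m)$ at the fixed sensors, the trunk net encodes a query location $y$, and the prediction $G(u)(y)$ is the dot product of the two embeddings.

```python
# Sketch (our own sizes/weights): the DeepONet forward pass, untrained.
import numpy as np

rng = np.random.default_rng(0)
m, width, p = 100, 64, 32   # sensors, hidden width, latent dimension p

def mlp(sizes):
    """Random (untrained) fully connected net with tanh activations."""
    Ws = [rng.normal(size=(a, b)) / np.sqrt(a)
          for a, b in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return forward

branch = mlp([m, width, p])      # encodes u(x_1), ..., u(x_m)
trunk = mlp([1, width, p])       # encodes the output location y

sensors = np.linspace(0, 1, m)
u = np.sin(2 * np.pi * sensors)  # an example input function, sampled at sensors

def G_u(y):
    """DeepONet prediction G(u)(y) = <branch(u), trunk(y)>."""
    return branch(u[None, :]) @ trunk(np.array([[y]])).T

print(G_u(0.5).item())  # untrained output; training fits (u, y) -> G(u)(y) data
```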